Complexity of Activity Patterns in a Bio-Inspired Hopfield-Type Network in Different Topologies

Cafiso, Marco, Paradisi, Paolo

arXiv.org Artificial Intelligence

Neural network models capable of storing memory have been extensively studied in computer science and computational neuroscience. The Hopfield network is a prototypical model of associative, or content-addressable, memory and has been analyzed in many forms. Ideas and methods from complex network theory have also been incorporated into artificial neural networks and learning, with emphasis on their structural properties. Nevertheless, temporal dynamics also play a vital role in biological neural networks, whose temporal structure is a crucial feature to examine. Biological neural networks display complex intermittency and can therefore be studied through the lens of temporal complexity (TC) theory. The TC approach looks at the metastability of self-organized states, characterized by a power-law decay in the inter-event time distribution and in the total activity distribution, or by a scaling behavior in the corresponding event-driven diffusion processes. In this study, we present a TC analysis of a biologically inspired Hopfield-type neural network model. We conducted a comparative assessment of scale-free and random network topologies, with particular emphasis on their global activation patterns. Our parametric analysis revealed comparable dynamical behaviors across both architectures. Furthermore, our investigation of temporal complexity characteristics uncovered that seemingly distinct dynamical patterns exhibit similar TC behaviors. In particular, similar power-law decay in the activity distribution and similar complexity levels are observed in both topologies, but with much lower noise in the scale-free topology. Notably, most of the complex dynamical profiles were consistently observed in scale-free network configurations, confirming the crucial role of hubs in neural network dynamics.
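To ground the abstract's starting point, here is a minimal sketch of the classical Hopfield associative memory the model builds on (Hebbian storage and sign units with +/-1 states); the paper's bio-inspired dynamics and the scale-free/random topologies it compares are not reproduced here.

```python
import numpy as np

def store(patterns):
    """Hebbian rule: W = (1/N) * sum_p x_p x_p^T, with zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=5):
    """Deterministic asynchronous sweeps: each unit moves toward sign(W s)."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = store(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]  # corrupt one bit
print(np.array_equal(recall(W, noisy), pattern))  # True: pattern restored
```

The corrupted state falls back into the stored pattern's basin of attraction, which is the content-addressable behavior the abstract refers to.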


A Proof of Theorem 1

Neural Information Processing Systems

We consider four types of benchmarks in our experiments, including random COPs and scale-free networks. To be fair and practical, we do not consider existing DNN-based BP variants. We compare our DABP with the following state-of-the-art COP solvers: (1) DBP with a damping factor of 0.9, and (2) its splitting-constraint factor graph version (DBP-SCFG) with a splitting ratio of 0.95; however, tuning the damping factor can be extremely tedious and time-consuming. The benchmarks have a small domain size (i.e., 5) and highly structured constraint functions, which allows effective pruning. Figure 9 reports convergence rates under different iteration limits (|X| = 100); it can be concluded that our DABP converges much faster than DBP and DBP-SCFG. To demonstrate the necessity of the heterogeneous hyperparameters of Eq. (6), we conduct extensive ablation experiments. Figures 1-12 present the results on solution quality, and Table 1 presents the GPU memory footprint of our DABP.


Generalized dimension reduction approach for heterogeneous networked systems with time-delay

Ma, Cheng, Korniss, Gyorgy, Szymanski, Boleslaw K., Gao, Jianxi

arXiv.org Artificial Intelligence

Networks of interconnected agents are essential to the study of complex networked systems' state evolution, stability, resilience, and control. However, high dimensionality and nonlinear dynamics prevent us from analyzing these systems theoretically. Recently, dimension-reduction approaches have reduced a system's size by mapping the original system to a one-dimensional system, such that a single effective representative captures its macroscopic dynamics. However, these approaches fail dramatically when the network is heterogeneous or has multiple community structures. Here, we bridge the gap by developing a generalized dimension-reduction approach, which maps the original system to an $m$-dimensional system consisting of $m$ interacting components. Notably, by validating it on various dynamical models, we show that this approach accurately predicts the original system's state and, if any, its tipping point. Furthermore, the numerical results demonstrate that this approach approximates the system's evolution and identifies the critical points for complex networks with time delay.
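As a hedged sketch of the one-dimensional reduction this paper generalizes: for dynamics of the form dx_i/dt = F(x_i) + sum_j A_ij G(x_j), the network is collapsed to dx_eff/dt = F(x_eff) + beta_eff * G(x_eff), with x_eff a degree-weighted average state and beta_eff an effective coupling. The names F, G, and beta_eff are illustrative, not the paper's notation; on a regular graph the reduction is exact, which makes it easy to check numerically.

```python
import numpy as np

F = lambda x: -x            # toy self-dynamics
G = lambda x: np.tanh(x)    # toy coupling function

def simulate_full(A, x0, dt=0.01, steps=3000):
    """Euler-integrate the full networked system dx/dt = F(x) + A G(x)."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (F(x) + A @ G(x))
    return x

def simulate_reduced(beta, x0, dt=0.01, steps=3000):
    """Euler-integrate the 1-D surrogate dx/dt = F(x) + beta * G(x)."""
    x = x0
    for _ in range(steps):
        x = x + dt * (F(x) + beta * G(x))
    return x

# Ring of 6 nodes (2-regular): every node has degree 2.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
deg = A.sum(axis=1)

x_full = simulate_full(A, np.full(n, 0.5))
beta_eff = deg @ deg / deg.sum()          # degree-weighted mean degree = 2
x_red = simulate_reduced(beta_eff, 0.5)

x_eff = deg @ x_full / deg.sum()          # degree-weighted average state
print(abs(x_eff - x_red) < 1e-6)          # True on a regular graph
```

Heterogeneous or modular networks break this single-variable surrogate, which is exactly the failure the paper's $m$-dimensional generalization addresses.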


Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems

Deng, Yanchen, Kong, Shufeng, Liu, Caihua, An, Bo

arXiv.org Artificial Intelligence

Belief Propagation (BP) is an important message-passing algorithm for various reasoning tasks over graphical models, including solving Constraint Optimization Problems (COPs). It has been shown that BP can achieve state-of-the-art performance on various benchmarks by mixing old and new messages before sending the new one, i.e., damping. However, tuning a static damping factor for BP is not only laborious but can also harm performance. Moreover, existing BP algorithms treat each variable node's neighbors equally when composing a new message, which limits their exploration ability. To address these issues, we seamlessly integrate BP, Gated Recurrent Units (GRUs), and Graph Attention Networks (GATs) within the message-passing framework to reason about dynamic weights and damping factors for composing new BP messages. Our model, Deep Attentive Belief Propagation (DABP), takes the factor graph and the BP messages in each iteration as input and infers the optimal weights and damping factors through GRUs and GATs, followed by a multi-head attention layer. Furthermore, unlike existing neural-based BP variants, we propose a novel self-supervised learning algorithm for DABP with a smoothed solution cost, which does not require expensive training labels and avoids the common out-of-distribution issue through efficient online learning. Extensive experiments show that our model significantly outperforms state-of-the-art baselines.
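The damping step the paper builds on can be sketched in one line: a freshly computed BP message is mixed with the previous one via a factor lam in [0, 1) before being sent. DABP instead learns per-edge, per-iteration weights and damping factors; the snippet below only shows the static baseline it replaces (names are illustrative).

```python
import numpy as np

def damped_update(m_old, m_computed, lam=0.9):
    """Send lam * m_old + (1 - lam) * m_computed instead of m_computed."""
    return lam * m_old + (1.0 - lam) * m_computed

m_old = np.array([0.2, 0.8])       # message from the previous iteration
m_computed = np.array([0.6, 0.4])  # freshly computed message
print(damped_update(m_old, m_computed))  # [0.24 0.76]
```

A high lam (such as the 0.9 used for the DBP baseline) slows message change and can stabilize BP on loopy graphs, but a single static value rarely suits every edge and iteration, which is the tuning burden the abstract criticizes.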


Meta Reinforcement Learning with Successor Feature Based Context

Han, Xu, Wu, Feng

arXiv.org Artificial Intelligence

Most reinforcement learning (RL) methods focus on learning a single task from scratch and cannot use prior knowledge to learn other tasks more effectively. Context-based meta-RL techniques have recently been proposed as a possible solution. However, they are usually less efficient than conventional RL and may require many trials and errors during training. To address this, we propose a novel meta-RL approach that achieves performance competitive with existing meta-RL algorithms while requiring significantly fewer environmental interactions. By combining context variables with the idea of decomposing reward in the successor feature framework, our method not only learns high-quality policies for multiple tasks simultaneously but can also quickly adapt to new tasks with a small amount of training. Compared with state-of-the-art meta-RL baselines, we empirically show the effectiveness and data efficiency of our method on several continuous control tasks.
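The reward decomposition behind successor features, which the abstract combines with context variables, can be sketched as follows: rewards factor as r(s, a) = phi(s, a) . w, so the value of a trajectory factors as Q = psi . w, where psi accumulates discounted features. Switching tasks then only requires a new w; psi is reused. The names phi, w, and psi below follow the generic successor-feature framework, not this paper's implementation.

```python
import numpy as np

gamma = 0.9  # discount factor

# Feature vectors phi(s_t, a_t) observed along a short trajectory.
phis = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]

# Successor features: discounted sum of future features along the trajectory.
psi = sum((gamma ** t) * phi for t, phi in enumerate(phis))

w_task = np.array([2.0, -1.0])  # task-specific reward weights
q_value = psi @ w_task          # value via the decomposition Q = psi . w

# Sanity check: same quantity computed as a discounted sum of rewards.
ret = sum((gamma ** t) * (phi @ w_task) for t, phi in enumerate(phis))
print(np.isclose(q_value, ret))  # True: the decomposition is exact
```

Because psi is task-independent, evaluating the same behavior under a new task reduces to a dot product with that task's w, which is what makes the fast adaptation the abstract claims plausible.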